RMM: Reinforced Memory Management for Class-Incremental Learning Supplementary Materials

Neural Information Processing Systems

This is supplementary to Section 5.2. To evaluate the performance of our RMM in unknown scenarios, we supplement the experiments by using policy functions trained in distinct numbers of phases and on different datasets, and show the testing results on CIFAR-100 in Table S5. For example, Row 5 is for training the policy on ImageNet-Subset. In Table S6, we can see clear improvements; further results (e.g., on ImageNet-Subset [4]) are available in Table S7. We run our experiments using GPU workstations.

Table S5 (excerpt). Testing results on CIFAR-100 with transferred policies:

No.  Method          Policy learned on   N=5    N=10   N=25
1    Baseline        -                   49.02  44.59  38.23
2    w/ RMM          ImageNet-Subset     53.15  50.05  42.89

Results with a fixed memory budget of exemplars:

No.  Method          Memory budget       N=5    N=10   N=25
1    Baseline        1000                64.31  60.97  58.77
2    w/ RMM (ours)   1000                68.20  65.57

Row 1 (baseline) is from the state-of-the-art method POD-AANets [10].



eCIL-MU: Embedding based Class Incremental Learning and Machine Unlearning

Zuo, Zhiwei, Tang, Zhuo, Wang, Bin, Li, Kenli, Datta, Anwitaman

arXiv.org Artificial Intelligence

New categories may be introduced over time, or existing categories may need to be reclassified. Class incremental learning (CIL) is employed for the gradual acquisition of knowledge about new categories while preserving information about previously learned ones in such dynamic environments. It might also be necessary to eliminate the influence of related categories on the model to adapt to reclassification. We thus introduce class-level machine unlearning (MU) within CIL. Typically, MU methods tend to be time-consuming and can potentially harm the model's performance, and a continuous stream of unlearning requests could lead to catastrophic forgetting. To address these issues, we propose a non-destructive eCIL-MU framework based on embedding techniques that maps data into vectors, which are then stored in vector databases. Our approach exploits the overlap between CIL and MU tasks for acceleration. Experiments demonstrate the capability of achieving unlearning effectiveness with up to $\sim 278\times$ acceleration.
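The core idea of embedding-based class-level unlearning can be illustrated with a toy sketch. This is not the authors' implementation: the `ToyVectorStore` class, its methods, and the nearest-centroid classifier are all hypothetical stand-ins for a real encoder plus vector database, chosen only to show how deleting a class's stored embeddings removes its influence without retraining.

```python
import math

class ToyVectorStore:
    """Toy stand-in for a vector database keyed by class label."""

    def __init__(self):
        # Maps class label -> list of embedding vectors.
        self.store = {}

    def add_class(self, label, vectors):
        # CIL step: register embeddings for a newly introduced class
        # without touching the entries of previously learned classes.
        self.store.setdefault(label, []).extend(vectors)

    def unlearn_class(self, label):
        # Class-level MU step: dropping the stored vectors removes the
        # class's influence on predictions, with no model retraining.
        self.store.pop(label, None)

    def predict(self, vector):
        # Nearest-centroid classification over the remaining classes.
        best_label, best_dist = None, math.inf
        for label, vecs in self.store.items():
            centroid = [sum(coord) / len(vecs) for coord in zip(*vecs)]
            dist = math.dist(vector, centroid)
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label
```

For example, after adding embeddings for classes "cat" and "dog", a query near the "cat" centroid is labeled "cat"; once `unlearn_class("cat")` is called, the same query falls back to the nearest remaining class, mimicking how unlearning requests take effect at the vector-store level rather than inside the model weights.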